
Conversation

lpyhdzx commented Oct 24, 2025

This PR adds the parameter-efficient fine-tuning method MPOP [1], which integrates Matrix Product Operator (MPO) decomposition with LoRA (which we call lorampo here) to improve parameter efficiency and training stability.

Key changes:

  • Implement lorampo method in MLP layers using MPO-based initialization
  • Add lora_mpo configuration option to LoraConfig
  • Update training scripts and utilities to support training with lorampo
  • Add example training script for lorampo experiments

Features:

  • Integration with existing LoRA infrastructure
  • Support for MPO-based weight initialization
  • Backward compatibility with standard LoRA

This enhancement allows users to leverage MPO decomposition for more efficient parameter adaptation while maintaining the simplicity of LoRA usage.
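To illustrate the intended user-facing API, here is a minimal usage sketch (hypothetical, assuming the proposed lora_mpo flag on LoraConfig described above; the model name and hyperparameters are just placeholders):

```python
# Hypothetical usage sketch for the proposed lora_mpo option; not part of the PR diff.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_mpo=True,  # proposed flag: use MPO-based initialization (lorampo)
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # otherwise the standard LoRA workflow
```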

[1] Liu et al. Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators. ACL 2021
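For intuition about the MPO decomposition used in [1], here is a minimal numpy sketch (for illustration only, not the code added by this PR) that factorizes a weight matrix into a chain of small local tensors via successive SVDs; the factor shapes and max_rank are arbitrary examples:

```python
# Conceptual MPO (tensor-train style) factorization of a weight matrix via successive SVDs.
# Illustrative only; shapes and ranks are arbitrary and this is not the PR's implementation.
import numpy as np

def mpo_decompose(W, out_factors, in_factors, max_rank):
    """Factorize W of shape (prod(out_factors), prod(in_factors)) into a chain of
    local 4-way tensors with shapes (r_prev, out_k, in_k, r_next)."""
    assert W.shape == (int(np.prod(out_factors)), int(np.prod(in_factors)))
    n = len(out_factors)
    # Reshape to (out_1, ..., out_n, in_1, ..., in_n), then interleave the modes
    # so that each (out_k, in_k) pair is adjacent.
    T = W.reshape(*out_factors, *in_factors)
    T = T.transpose([v for k in range(n) for v in (k, n + k)])
    cores, r_prev = [], 1
    for k in range(n - 1):
        mat = T.reshape(r_prev * out_factors[k] * in_factors[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, out_factors[k], in_factors[k], r))
        T = np.diag(S[:r]) @ Vt[:r]  # carry the remainder to the next step
        r_prev = r
    cores.append(T.reshape(r_prev, out_factors[-1], in_factors[-1], 1))
    return cores

# Example: factorize a 768x768 matrix into 3 local tensors.
W = np.random.randn(768, 768)
cores = mpo_decompose(W, out_factors=[8, 12, 8], in_factors=[8, 12, 8], max_rank=64)
print([c.shape for c in cores])  # [(1, 8, 8, 64), (64, 12, 12, 64), (64, 8, 8, 1)]
```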

lpyhdzx and others added 2 commits October 24, 2025 17:04
Commit 1: Add MPO-LoRA (lorampo) integration. Implements the lorampo_init method in LoraLayer for MPO-based initialization, adds the lora_mpo option to LoraConfig, updates training scripts and utilities to support MPO-LoRA, and adds an example training script for MPO-LoRA experiments.

Commit 2: Merge updates from the main branch. Resolves a conflict in src/peft/tuners/lora/config.py, keeps both the lora_mpo and ensure_weight_tying fields, integrates new methods from main (DeLoRA, OSF, WaveFT), and updates CI workflows and documentation.
lpyhdzx (Author) commented Oct 24, 2025

For your convenience, here are a few points to help with a quick review.

(1) How to quickly test this PR

You can test lorampo directly via the script peft/examples/sft/run_peft_mpo.sh.

Simply modify the following two lines:

    export model_path="YOUR_MODEL_PATH"  # e.g., "Qwen/Qwen3-0.6B"
    export output_dir="./"               # e.g., "./checkpoints"

Then run the script to verify functionality.

(2) What tests have been done

We validated the PR under the following setting:
• Model: Qwen3-0.6B
• Dataset: smangrul/ultrachat-10k-chatml
• Training: 1 epoch fine-tuning
LoRA results:

{'eval_loss': 1.738003134727478, 'eval_runtime': 83.4676, 'eval_samples_per_second': 21.949, 'eval_steps_per_second': 2.744, 'eval_entropy': 1.7323376929395584, 'eval_num_tokens': 8800671.0, 'eval_mean_token_accuracy': 0.5938055174319504, 'epoch': 1.0}
{'train_runtime': 1227.0524, 'train_samples_per_second': 7.41, 'train_steps_per_second': 0.117, 'train_loss': 1.7760656763623643, 'epoch': 1.0}

lorampo results:

{'eval_loss': 1.7559425830841064, 'eval_runtime': 57.0015, 'eval_samples_per_second': 32.139, 'eval_steps_per_second': 4.017, 'eval_entropy': 1.7077645209158352, 'eval_num_tokens': 8800671.0, 'eval_mean_token_accuracy': 0.592191962956341, 'epoch': 1.0}
{'train_runtime': 879.3554, 'train_samples_per_second': 10.341, 'train_steps_per_second': 0.163, 'train_loss': 1.792860364580488, 'epoch': 1.0}

Observation: compared to LoRA, the lorampo method takes less training time while achieving similar performance.
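Condensed comparison of the numbers above:

| Metric | LoRA | lorampo |
| --- | --- | --- |
| train_runtime (s) | 1227.05 | 879.36 |
| train_samples_per_second | 7.41 | 10.34 |
| train_loss | 1.776 | 1.793 |
| eval_loss | 1.738 | 1.756 |
| eval_mean_token_accuracy | 0.594 | 0.592 |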

lpyhdzx (Author) commented Oct 27, 2025

Hi, could a maintainer please approve and run the pending workflows for this PR? They’re currently blocked with “2 workflows awaiting approval”. Thanks!

BenjaminBossan (Member) left a comment


Thanks for this PR to add MPO to PEFT. Just for clarification, this is the corresponding paper and you are the first author, is that right?

I read some parts of the paper and although the mathematics are a bit outside my area of expertise, I think I get the general idea. However, what is unclear to me is how this approach relates to LoRA. This implementation is basically a small variation to LoRA, with a different initialization and a different forward path. But the paper (because of its age) doesn't mention LoRA at all. Is there a follow up paper to explain the relationship or could you please explain here how it's possible to express MPO in terms of LoRA?

Since this method is a LoRA variant, I would like you to implement it in terms of a LoraVariant subclass. To see how that works, check for instance this PR.

Moreover, before we can proceed, please make the following changes:

  1. The initialization has a dependency on matrix2mpo_plus. However, we don't want to add another dependency to PEFT, especially not to small packages like this. Would it be possible to include the required code in mpo_shape_calculator.py (let's rename it to mpo_utils.py then), i.e., to vendor the code?
  2. There are a few comments in Chinese, could you please translate to English?
  3. Regarding the included example: Let's not edit the existing one but rather add a separate example.
